38 research outputs found

    A Unique New Antenna Technology for Small (And Large) Satellites

    The application of large antennas in spacecraft is often limited by available volume, as well as by the more usual mass limitation. Shroud dimensions usually determine the maximum aperture which can be carried without resorting to complex and potentially unreliable unfurling mechanisms. This applies all the more in a small-satellite environment, with the smaller available launch volumes and severe mass limits of this species. FLAPS (Flat Parabolic Surface) is a newly developed technology for RF reflector surfaces which frees the spacecraft designer from the packaging rigidity of the common parabolic dish. It offers the ability to essentially duplicate the capability of a parabolic reflector in a reflector of almost any shape. The surface is shaped electrically rather than physically, in much the same manner as in a phased array, but by a totally passive array of dipoles suspended above a conductive ground plane. The dipoles are sized and spaced for the particular frequency and feed arrangement desired, and can produce a beam of essentially any desired shape. The FLAPS technology is applicable across the microwave and millimeter-wave spectrum. FLAPS reflectors have been built and tested at 2, 6, 16, 36, and 95 GHz, as well as at various other frequencies in this range. The technology lends itself to a variety of fabrication methods, which can be highly automated.
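
    As a rough companion to the frequency list above, the sketch below computes free-space wavelengths and the corresponding half-wave dipole lengths at the tested frequencies. The half-wave sizing rule is only an illustrative assumption; actual FLAPS element dimensions and spacings depend on the feed geometry and the phase shift each dipole must impart, which the abstract does not specify.

```python
# Illustrative only: approximate free-space wavelength and half-wave dipole
# length at the frequencies quoted in the abstract. Real FLAPS elements are
# sized for the phase shift required at each point on the surface, not just
# for the free-space wavelength.
C = 299_792_458.0  # speed of light in vacuum, m/s

for f_ghz in (2, 6, 16, 36, 95):
    wavelength_mm = C / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz:>3} GHz: wavelength ~{wavelength_mm:7.2f} mm, "
          f"half-wave dipole ~{wavelength_mm / 2:6.2f} mm")
```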

    Complexity of Network Reliability and Optimal Database Placement Problems

    A fundamental problem of distributed database design in an existing network where components can fail is finding an optimal location at which to place the database in a centralized system, or copies of each data item in a decentralized or replicated system. In this paper it is proved for the first time exactly how hard this placement problem is under the measure of data availability. Specifically, we show that the optimal placement problem for availability is #P-complete, a measure of intractability at least as severe as NP-completeness. Given the anticipated computational difficulty of finding an exact solution, we go on to describe an effective, practical method for approximating the optimal copy placement. To obtain these results, we model the environment in which a distributed database operates by a probabilistic graph, which is a set of fully-reliable vertices representing sites, and a set of edges representing communication links, each operational with a rational probability. We prove that finding the optimal copy placement in a probabilistic graph is #P-complete by giving a sequence of reductions from #Satisfiability. We generalize this result to networks in which each site and each link has an independent, rational operational probability and to networks in which all the sites or all the links have a fixed, uniform operational probability.
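
    The probabilistic-graph model described above lends itself to a small Monte Carlo sketch: sample the links, then estimate the availability of a candidate single-copy placement as the probability that a randomly chosen site can reach it. This is a generic sampling approximation written for illustration on an invented four-site topology; it is not the approximation method developed in the paper, and the names and probabilities below are assumptions.

```python
import random

# Hypothetical probabilistic graph: fully reliable sites, links that are
# each operational with an independent probability (as in the abstract).
SITES = [0, 1, 2, 3]
LINKS = {(0, 1): 0.95, (1, 2): 0.90, (2, 3): 0.95, (0, 3): 0.85}

def reachable(up_links, source):
    """Return the set of sites reachable from `source` over up links."""
    seen, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for a, b in up_links:
            v = b if a == u else a if b == u else None
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def estimate_availability(copy_site, trials=50_000):
    """Monte Carlo estimate: probability a random site can reach the copy."""
    hits = 0
    for _ in range(trials):
        up = [e for e, p in LINKS.items() if random.random() < p]
        hits += copy_site in reachable(up, random.choice(SITES))
    return hits / trials

# Compare candidate placements; the optimal single copy maximizes this value.
for s in SITES:
    print(f"copy at site {s}: estimated availability ~{estimate_availability(s):.3f}")
```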

    A Tight Upper Bound on the Benefits of Replication and Consistency Control Protocols

    We present an upper bound on the performance provided by a protocol guaranteeing mutually exclusive access to a replicated resource in a network subject to component failure and subsequent partitioning. The bound is presented in terms of the performance of a single resource in the same network. The bound is tight and is the first such bound known to us. Since mutual exclusion is one of the requirements for maintaining the consistency of a database object, this bound provides an upper limit on the availability provided by any database consistency control protocol, including those employing dynamic data relocation and replication. We show that if a single copy provides availability A for 0 <= A <= 1, then no scheme can achieve availability greater than sqrt(A) in the same network. We show this bound to be the best possible for any network with availability greater than 0.25. Although, as we proved, the problem of calculating A is #P-complete, we describe a method for approximating the optimal location for a single copy which adjusts dynamically to current network characteristics. This bound is most useful for high availabilities, which tend to be obtainable with modern networks and their constituent components.
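
    A quick numeric reading of the bound, as a sketch: if the best single copy achieves availability A, no replication or relocation scheme can exceed sqrt(A), so the room for improvement shrinks rapidly as A approaches 1, the regime the abstract identifies as typical of modern networks.

```python
import math

# The stated bound: if a well-placed single copy achieves availability A,
# no replication scheme can exceed sqrt(A) in the same network.
for A in (0.25, 0.50, 0.90, 0.95, 0.99):
    bound = math.sqrt(A)
    print(f"A = {A:.2f}: replication bounded by {bound:.4f} "
          f"(maximum possible gain {bound - A:.4f})")
```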

    Availability Issues in Data Replication in Distributed Databases

    Replication of data at more than one site in a distributed database has been reported to increase the availability of data in systems where sites and links are subject to failure. We have shown, in results summarized in this paper, that in many interesting cases the advantage is slight. A well-placed single copy is available to transactions almost as much of the time as correct replicated data, no matter how ingeniously the latter is managed. We explain these findings in terms of the behavior of the partitions that form in networks where components fail. We also show that known and rather simple protocols for the maintenance of multiple copies are essentially the best possible by comparing them against an unrealizable protocol that knows the future. We complete our study of these questions by reporting that, while computing the availability of data is #P-complete, there is nonetheless a tight analytical bound on the amount by which replication can improve over a well-located single copy. We close with some observations regarding system design motivated by this work.

    Effects of Replication on Data Availability

    In this paper we examine the effects of replication on the availability of data in a large network. This analysis differs from previous analyses in that it compares the performance of a dynamic consistency control protocol not only to that of other consistency control protocols, but also to the performance of non-replication and to an upper bound on data availability. It also differs in that we gather extensive simulations on large networks subject to partitions at realistically high component reliabilities. We examine the dynamic consistency protocol presented by Jajodia and Mutchler [9, 12] and by Long and Paris [18], along with two proposed enhancements to this protocol [10, 11]. We study networks of 101 sites and up to 5050 links (fully connected) in which all components, although highly reliable, are subject to failure. We demonstrate the importance in this realistic environment of an oft-neglected parameter of the system model, the ratio of transaction submissions to component failures. We also show the impact of the number of copies on both the protocol performance and the potential of replication as measured by the upper bound. Our simulations show that the majority-of-current protocol performs optimally for topologies that yield availabilities of at least 65%. On the other hand, the availability provided by non-replication is inferior to that of the majority-of-current protocol by at most 5.9 percentage points for these same topologies. At this point of maximum difference, the primary copy protocol yields availability 59.1% and the majority-of-current protocol yields availability 65.0%. We discuss the characteristics of the model limiting the performance of replication.
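
    A toy version of the comparison made in this abstract can be sketched as follows: sample link failures in a small probabilistic network, then check whether a transaction submitted at a random site can reach a single well-placed copy (non-replication) versus a majority of three copy sites. The topology, probabilities, and placements are invented, and plain static majority voting stands in for the dynamic protocol actually studied, so the numbers only illustrate the kind of gap being measured.

```python
import random

# Invented six-site ring-plus-chord topology; each link up with probability 0.97.
SITES = list(range(6))
LINKS = {(0, 1): 0.97, (1, 2): 0.97, (2, 3): 0.97,
         (3, 4): 0.97, (4, 5): 0.97, (5, 0): 0.97, (1, 4): 0.97}

def partition_of(up_links, site):
    """Return the set of sites in `site`'s partition given the up links."""
    seen, stack = {site}, [site]
    while stack:
        u = stack.pop()
        for a, b in up_links:
            v = b if a == u else a if b == u else None
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def compare(single_site, copy_sites, trials=50_000):
    """Availability of one well-placed copy vs. a static three-copy majority."""
    need = len(copy_sites) // 2 + 1
    single = majority = 0
    for _ in range(trials):
        up = [e for e, p in LINKS.items() if random.random() < p]
        part = partition_of(up, random.choice(SITES))  # requester's partition
        single += single_site in part
        majority += len(part & set(copy_sites)) >= need
    return single / trials, majority / trials

s, m = compare(single_site=1, copy_sites=[0, 2, 4])
print(f"single copy ~{s:.3f}, three-copy static majority ~{m:.3f}")
```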

    Effects of Replication on the Duration of Failure in Distributed Databases

    Replicating data objects has been suggested as a means of increasing the performance of a distributed database system in a network subject to link and site failures. Since a network may partition as a consequence of such failures, a data object may become unavailable from a given site for some period of time. In this paper we study the duration of failure, which we define as the length of time, once the object becomes unavailable from a particular site, that the object remains unavailable. We show that, for networks composed of highly-reliable components, replication does not substantially reduce the duration of failure. We model a network as a collection of sites and links, each failing and recovering independently according to a Poisson process. Using this model, we demonstrate via simulation that the duration of failure incurred using a non-replicated data object is nearly as short as that incurred using a replicated object and a replication control protocol, including an unrealizable protocol which is optimal with respect to availability. We then examine analytically a simplified system in which the sites but not the links are subject to failure. We prove that if each site operates with probability p, then the optimal replication protocol, Available Copies [5, 26], reduces the duration of failure by at most a factor of (1-p)/(1+p). Lastly, we present bounds for general systems, those in which both the sites and the communications between the sites may fail. We prove, for example, that if sites are 95% reliable and communication failures are sufficiently short (either the links are infallible or the failure durations satisfy a function specified in the paper), then replication can improve the duration of failure by at most 2.7% of that experienced using a single copy. These results show that replication has only a small effect on the duration of failure in present-day partitionable networks comprised of realistically reliable components.
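
    Reading the site-failure-only bound numerically, as a sketch: with site reliability p = 0.95 the factor (1-p)/(1+p) is roughly 2.6%, the same order as the 2.7% quoted above for the general case with short communication failures.

```python
# Site-only bound quoted above: with each site up with probability p and
# links assumed reliable, replication can shorten the duration of failure
# by at most a factor of (1 - p) / (1 + p) of the single-copy duration.
for p in (0.90, 0.95, 0.99):
    factor = (1 - p) / (1 + p)
    print(f"p = {p:.2f}: improvement limited to {factor * 100:.1f}% "
          f"of the single-copy duration of failure")
```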

    Finding Optimal Quorum Assignments for Distributed Databases

    Replication has been studied as a method of increasing the availability of a data item in a distributed database subject to component failures and consequent partitioning. The potential for partitioning requires that a protocol be employed which guarantees that any access to a data item is aware of the most recent update to that data item. By minimizing the number of access requests denied due to this constraint, we maximize availability. In the event that all access requests are reads, placing one copy of the data item at each site clearly leads to maximum availability. The other extreme, in which all access requests are write requests or are treated as such, has been studied extensively in the literature. In this paper we investigate the performance of systems with both read and write requests. We describe a distributed on-line algorithm for determining the optimal parameters, or optimal quorum assignments, for a commonly studied protocol, the quorum consensus protocol [9]. We also show how to incorporate these optimization techniques into a dynamic quorum reassignment protocol. In addition, we demonstrate via simulation both the value of this algorithm and the effect of various read-write ratios on availability. This simulation, on 101 sites and up to 5050 links (fully connected), demonstrates that the techniques described here can greatly increase data availability, and that the best quorum assignments are frequently realized at the extreme values of the quorum parameters.
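
    The quorum-consensus constraints themselves are easy to state in code. The sketch below enumerates read/write quorum sizes (r, w) for n copies under the usual intersection requirements r + w > n and 2w > n, and scores each assignment by a crude weighted combination of read and write success probabilities, treating copies as independently reachable. The independence assumption and the scoring are illustrative stand-ins for the availability measure and the distributed on-line algorithm described in the paper.

```python
from math import comb

def at_least(k, n, p):
    """P(at least k of n copies reachable), copies independent with prob p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def best_quorums(n, p, read_frac):
    """Search quorum assignments satisfying r + w > n and 2w > n."""
    best = None
    for w in range(n // 2 + 1, n + 1):        # write quorums: 2w > n
        for r in range(n - w + 1, n + 1):     # read quorums: r + w > n
            score = (read_frac * at_least(r, n, p)
                     + (1 - read_frac) * at_least(w, n, p))
            if best is None or score > best[0]:
                best = (score, r, w)
    return best

for read_frac in (0.1, 0.5, 0.9):
    score, r, w = best_quorums(n=5, p=0.95, read_frac=read_frac)
    print(f"read fraction {read_frac:.1f}: best (r, w) = ({r}, {w}), "
          f"weighted availability ~{score:.4f}")
```

    Varying the read fraction shows how the preferred (r, w) pair shifts with the read-write ratio, which is the effect the paper's simulation quantifies for partitionable 101-site networks.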

    A Comparison of Consistency Control Protocols

    In this paper we analyze three protocols for maintaining the mutual consistency of replicated objects in a distributed computing environment and compare their performance with that of an oracle protocol whose performance is optimal. We examine these protocols, two dynamic protocols and the majority consensus protocol, via simulations using two measures of availability. The analysis shows that the dynamic protocols, under realistic assumptions, do not perform significantly better than the static voting scheme. Finally, we demonstrate that none of these approaches performs as well as our oracle protocol, which is shown to be an upper bound on availability.

    Arguments for and against the legalization of abortion

    Free association to the word “abortion” would probably yield a fantastic array of emotional responses: pain, relief, murder, crime, fear, freedom, genocide, guilt, sin. Which of these associations people have no doubt reflects their age, marital status, religion, or nationality. To a thirty-five-year-old feminist, the primary response might be “freedom” and “relief”; to an unmarried American college girl, “fear” and “pain”; to a Catholic priest, “murder” and “sin”; to some black militants, “genocide.” As a result of the Supreme Court rulings in Roe v. Wade and Doe v. Bolton, 93 S. Ct. 705 (1973), every woman in the United States has the same right to abortion during the first three months of pregnancy as she has to any other minor surgery. These rulings have been received differently throughout society. While abortion proponents have viewed the rulings with exhilaration, pro-life advocates consider the decision a monumental error which will result in chaos. The purpose of this paper was to explore the arguments for and against the legalization of abortion. This study includes an analysis of the Supreme Court rulings on abortion and the definitions, assumptions, and perspectives that abortion proponents and pro-life advocates have internalized in defense of their diametrically opposed views toward abortion. This study will include the judicial developments which preceded the Supreme Court rulings. The affirming and dissenting opinions of the Supreme Court Justices will be discussed, with emphasis being placed on the basic principles represented by those opinions. A review of the literature was the major procedure used to gather information. Data extracted from the Supreme Court rulings, professional journals, government periodicals, and private organization decision papers provided background information and contemporary thought upon which an objective analysis could be based.

    A critical review of the research literature on Six Sigma, Lean and StuderGroup's Hardwiring Excellence in the United States: the need to demonstrate and communicate the effectiveness of transformation strategies in healthcare

    Background: U.S. healthcare organizations are confronted with numerous and varied transformational strategies promising improvements along all dimensions of quality and performance. This article examines the peer-reviewed literature from the U.S. for evidence of effectiveness among three currently popular transformational strategies: Six Sigma, Lean/Toyota Production System, and Studer's Hardwiring Excellence.
    Methods: The English-language health, healthcare management, and organizational science literature (up to December 2007) indexed in Medline, Web of Science, ABI/Inform, Cochrane Library, CINAHL, and ERIC was reviewed for studies on the aforementioned transformation strategies in healthcare settings. Articles were included if they appeared in a peer-reviewed journal, described a specific intervention, were not classified as a pilot study, provided quantitative data, and were not review articles. Nine references on Six Sigma, nine on Lean/Toyota Production System, and one on StuderGroup met the study's eligibility criteria.
    Results: The reviewed studies universally concluded that the implementations of these transformation strategies were successful in improving a variety of healthcare-related processes and outcomes. Additionally, the existing literature reflects a wide application of these transformation strategies in terms of both settings and problems. However, despite these positive features, the vast majority had methodological limitations that might undermine the validity of the results. Common features included weak study designs, inappropriate analyses, and failures to rule out alternative hypotheses. Furthermore, frequently absent was any attention to changes in organizational culture or substantial evidence of lasting effects from these efforts.
    Conclusion: Despite the current popularity of these strategies, few studies meet the inclusion criteria for this review. Furthermore, each could have been improved substantially in order to ensure the validity of the conclusions, demonstrate sustainability, investigate changes in organizational culture, or examine how one strategy interfaced with other concurrent and subsequent transformation efforts. While informative results can be gleaned from less rigorous studies, improved design and analysis can more effectively guide healthcare leaders who are motivated to transform their organizations and convince others of the need to employ such strategies. Demanding more exacting evaluation of projects by consultants, or partnerships with health management researchers in academic settings, can support such efforts.